
    Width of the confinement-induced resonance in a quasi-one-dimensional trap with transverse anisotropy

    We theoretically study the width of the s-wave confinement-induced resonance (CIR) in quasi-one-dimensional atomic gases under tunable, transversely anisotropic confinement. We find that the width of the CIR can be tuned by varying the transverse anisotropy. The change in the width of the CIR can manifest itself in the position of the discontinuity in the interaction energy density, which can be probed experimentally.
    Comment: 6 pages, 3 figures, updated references, published version
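    For context, the result being generalized is the standard CIR condition for isotropic transverse confinement (Olshanii, 1998); the sketch below is background, not taken from this abstract, and the anisotropic confinement studied here effectively shifts this condition. For two identical atoms of mass $m$ with 3D s-wave scattering length $a_s$ and transverse oscillator length $a_\perp = \sqrt{2\hbar/(m\omega_\perp)}$, the effective 1D coupling is

        $$ g_{1D} = \frac{2\hbar\omega_\perp a_s}{1 - C\, a_s/a_\perp}, \qquad C = |\zeta(1/2)| \approx 1.4603, $$

    which diverges (the resonance) when $a_\perp = C\, a_s$.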

    Structure of Moves in Research Article Abstracts in Applied Linguistics

    An abstract summarizes the accompanying article in order to promote it. While many move-analysis studies of abstracts in applied linguistics (AL) have used similar coding frameworks and demonstrated similar rhetorical organizations, their findings have not yet been aggregated to show the overall picture. The present study aimed both to examine move structures in AL abstracts and to compare the results with previous studies, synchronically and diachronically. Fifty abstracts were collected from articles published in the journal English for Specific Purposes (ESP) between 2011 and 2013, and their sentences were coded using a five-move scheme adapted from previous studies. Combining the results of previous research with those of the present study showed that most AL abstracts give information on the purpose, methodology, and findings of the associated article, while about half omit introducing the topic and discussing the findings. It was also found that authors frequently violate the move sequence expected by current schemes. These findings, consistent with previous research, suggest that researchers should explore the connection between the findings of move analyses and teaching materials for academic writing.

    Emerging Paradigms of Neural Network Pruning

    Over-parameterization of neural networks benefits optimization and generalization, yet it incurs real costs in practice. Pruning is adopted as a post-processing solution to this problem: it aims to remove unnecessary parameters from a neural network with little compromise in performance. It has long been believed that the resulting sparse network cannot be trained from scratch to comparable accuracy. However, several recent works (e.g., [Frankle and Carbin, 2019a]) challenge this belief by discovering random sparse networks that can be trained to match the performance of their dense counterparts. This new pruning paradigm has since inspired further methods of pruning at initialization. Despite this encouraging progress, how to reconcile these new pruning paradigms with traditional pruning has not yet been explored. This survey seeks to bridge the gap by proposing a general pruning framework in which the emerging paradigms can be accommodated alongside the traditional one. Within it, we systematically examine the major differences and new insights brought by these new paradigms, discussing representative works at length. Finally, we summarize the open questions as worthy future directions.
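    To make the traditional pipeline concrete, here is a minimal sketch of one-shot global magnitude pruning in PyTorch, the kind of post-training method the survey contrasts with pruning at initialization. The function name and structure are illustrative assumptions, not code from the survey.

        import torch

        def magnitude_prune(model: torch.nn.Module, sparsity: float) -> None:
            """Zero out the smallest-magnitude weights globally, in place."""
            weights = [p for p in model.parameters() if p.dim() > 1]  # skip biases
            scores = torch.cat([w.detach().abs().flatten() for w in weights])
            k = max(1, int(sparsity * scores.numel()))   # how many weights to remove
            threshold = scores.kthvalue(k).values        # k-th smallest magnitude
            for w in weights:
                mask = (w.detach().abs() > threshold).to(w.dtype)
                w.data.mul_(mask)                        # apply the pruning mask

        # Usage: prune 90% of the weights, then fine-tune to recover accuracy.
        # model = torch.nn.Sequential(torch.nn.Linear(784, 300), torch.nn.ReLU(),
        #                             torch.nn.Linear(300, 10))
        # magnitude_prune(model, sparsity=0.9)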